Monday 31 March 2014

OOoooooohhh Big Oh

Oh Big Oh... Yup, I don't have too much to say about that other than: what the heck?!

I understand most of the run times. I can follow why an algorithm has a running time of n^2, n or constant - that makes complete and total sense.

What I don't understand is the lovely notation itself. The why and the what are my main questions.

So, with further research, I have come to understand that Big Oh notation is a way to describe the 'limiting behavior' of a function. In particular, within the computer science domain, Big Oh notation is used to clarify how algorithms respond as their input grows (i.e.: their running time or the working space required).

On topic with the previous sorting post, in this case we are concerned with Big Oh notation as it refers to the growth rate of an algorithm's running time.
Ex// If we double the length of the list, what happens to the run time?

As far as running times are concerned, the hierarchy is as follows:

O(1) <= O(lg n) <= O(n) <= O(n lg n) <= O(n^2) <= O(n^3) <= O(2^n) <= O(n^n)
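To see the doubling question concretely, here is a little experiment I put together (my own toy code, not from class) that counts the basic steps a linear and a quadratic function take as the input doubles:

def linear_steps(n):
    # one pass over n items -> O(n)
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def quadratic_steps(n):
    # a pass inside a pass -> O(n^2)
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps

for n in [100, 200, 400]:
    print(n, linear_steps(n), quadratic_steps(n))

Doubling n doubles the linear count but quadruples the quadratic one - which is exactly the 'what happens to the run time' question from above.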

Sorting

Sorting is the process of placing elements from a collection in some kind of order. An example of such would be placing the numbers in a list in ascending or descending order.
Various sorting algorithms have been developed, although which algorithm is the most efficient is still up for debate.

One sorting algorithm is the bubble sort. It compares adjacent items and exchanges those that are out of the desired order.
Because each pass goes through the list, and up to n passes may be needed, the worst case run time is O(n^2). The worst case happens when the elements are in the reverse of the order you want them in.

The best case run time is O(n) -> this means that every item in the list is already in the order you want, and the only action that has to be taken is one comparing pass over all n items (provided the sort stops as soon as a full pass makes no swaps).
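Here is a rough sketch of bubble sort (my own version; the early-exit flag is what gives the O(n) best case on an already-sorted list):

def bubble_sort(lst):
    # compare adjacent items and exchange those that are out of order
    for i in range(len(lst) - 1):
        swapped = False
        for j in range(len(lst) - 1 - i):
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
                swapped = True
        if not swapped:
            break    # a full pass with no swaps means the list is sorted
    return lst

print(bubble_sort([5, 1, 4, 2, 8]))    # [1, 2, 4, 5, 8]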


Selection sort

Selection sort is another sorting algorithm, which improves on bubble sort in simplicity yet has a worse best-case run time.

The algorithm divides the input list into two parts: a sorted section at the front and the unsorted remainder behind it.
The algorithm then goes over the unsorted section, finds the next wanted value (the smallest, say, when sorting in ascending order) and places it at the end of the sorted section.
Selection sort runs in a way that once an item is placed, it sits in its final position.

The run time behind selection sort is O(n^2) for both the best and the worst case. 
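A minimal selection sort sketch (again my own code, assuming ascending order) - note that both loops always run in full, which is why the best and worst cases are the same:

def selection_sort(lst):
    # everything before index i is the sorted section; the rest is unsorted
    for i in range(len(lst)):
        smallest = i
        for j in range(i + 1, len(lst)):
            if lst[j] < lst[smallest]:
                smallest = j
        # place the next wanted value at the end of the sorted section;
        # once placed, it never moves again
        lst[i], lst[smallest] = lst[smallest], lst[i]
    return lst

print(selection_sort([5, 1, 4, 2, 8]))    # [1, 2, 4, 5, 8]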

We also learned a couple of new sorting algorithms, such as merge sort.

Merge sort is different from the other algorithms because it splits the list into two separate lists and then recursively calls merge sort on the sublists. Once the recursion hits the base case, which is a list of length one, it 'merges' the sorted sublists back together, placing the items in order.

The run time for merge sort makes it one of the most efficient algorithms known so far: it runs at O(n log n) for both the best and worst cases.
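Here is roughly what merge sort looks like (my own sketch of the standard version, not the course code):

def merge(left, right):
    # 'merge' two sorted lists by repeatedly taking the smaller front item
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

def merge_sort(lst):
    if len(lst) <= 1:
        return lst    # base case: a list of length one is already sorted
    mid = len(lst) // 2
    # recursively sort each half, then merge the two sorted halves
    return merge(merge_sort(lst[:mid]), merge_sort(lst[mid:]))

print(merge_sort([5, 1, 4, 2, 8]))    # [1, 2, 4, 5, 8]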



Friday 28 March 2014

Binary Search Tree

Binary Search Trees are built out of linked nodes, much like a LinkedList.

We have seen the list representation of trees. Much like a LinkedList, a tree is made of nodes: each node has a root (which contains the information), a self.left which represents the left branch of the tree, and a self.right which represents the right branch of the tree.

Ex// 
Tree[3, Tree[5, Tree[6, None, None], None], Tree[8, None, None]]

      3
     / \
    5   8
   /
  6


The implementation of a binary search tree is very similar. The requirements of a binary search tree are that there are no repeating node values, any value less than the root goes into the left branch, and any value greater than the root goes into the right branch. This repeats for each subtree.
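A rough sketch of how insertion might follow those rules (the class and function names here are my own guesses, not the course code):

class BSTNode:
    def __init__(self, root, left=None, right=None):
        self.root = root      # the information stored at this node
        self.left = left      # left branch: values less than self.root
        self.right = right    # right branch: values greater than self.root

def insert(node, value):
    if node is None:
        return BSTNode(value)    # an empty branch is where the value belongs
    if value < node.root:
        node.left = insert(node.left, value)
    elif value > node.root:
        node.right = insert(node.right, value)
    # value == node.root: no repeating values, so nothing to do
    return node

tree = None
for v in [5, 3, 8, 6]:
    tree = insert(tree, v)
print(tree.root, tree.left.root, tree.right.root)    # 5 3 8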




More linked lists

Last week I thought I understood Linked Lists, but this week's lab proved that I really did not grasp the concept of linked lists.

For some reason, my mind can't grasp the concept of one LLNode referring to another LLNode, and the idea of replacing such a link so that it references a new node really blows my mind.

I will try looking it up and read more about it this week and hopefully have more to say about it next week!

Linked List --> Linked List --> Linked List

This week, Linked Lists were introduced.
Linked lists are made up of nodes. Each node contains a reference to the next node in the list. In addition, each node contains a unit of data which holds the information, such as a string, an integer or a symbol.
A linked list is considered a recursive data structure because it has a recursive definition: each node's link is itself either another node or the empty list.

A linked list is either:
 - the empty list, represented by None, or
 - a node with a root and a reference to the next node.

Lists are useful because they provide a way to assemble multiple objects into a single unit, often referred to as a collection. Linked lists are, in many ways, very similar to lists because they also contain a collection of information. What makes them arguably better for some operations is that they can take less time to manipulate: modifying the front of a linked list only touches one node, while the same change to a list means moving every element.
Take the example of wanting to add the integer 1 to the beginning of a list.

lst = [2, 3, 4, 5]

linked_list = LLNode(2, LLNode(3, LLNode(4, LLNode(5, None))))

In the list, you would have to insert the item 1 at index 0, which shifts every other item to a new index.
In the LinkedList, one simply has to take the head of the list LLNode(2, LLNode(3, etc.)) and save that as the link, and then create a new head LLNode(1) which points to that link.


linked_list = LLNode(1, linked_list)
# linked_list is now LLNode(1, LLNode(2, LLNode(3, LLNode(4, LLNode(5, None)))))
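For completeness, here is roughly what the LLNode class might look like (my own sketch; the attribute names are guesses, not the course's exact code):

class LLNode:
    def __init__(self, item, link=None):
        self.item = item    # the unit of data (a string, an integer, ...)
        self.link = link    # reference to the next node, or None

linked_list = LLNode(1, LLNode(2, LLNode(3, LLNode(4, LLNode(5, None)))))
node = linked_list
while node is not None:    # traverse by following the links
    print(node.item)       # prints 1, 2, 3, 4, 5
    node = node.link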

Saturday 1 March 2014

Recursion(Recursion(Recursion()))

Recursive code... where to begin...?

Think of recursive code as one big circle: in order for a function to run, it calls on itself.

Take this example that I found
source: http://www.loyalty.org/~schoen/python/class-4-recursion.txt 

def factorial(n):
    if n == 0:
        return 1    # base case: stops the recursive circle
    else:
        return n * factorial(n - 1)

The function factorial(n) takes a number n and then calls on itself within its body.
The body also requires a base case, in this case if n == 0, which stops the recursive circle. Without one, the circle would continue until eventually the computer runs out of memory space - which leads to a very ugly output in the Python shell. Otherwise, the function keeps calling itself on (n-1) until n is equal to 0, at which point it returns 1 and the stacked-up multiplications are carried out.
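Tracing a small call makes the circle visible (my own trace, not from the source above):

factorial(3)
-> 3 * factorial(2)
-> 3 * (2 * factorial(1))
-> 3 * (2 * (1 * factorial(0)))
-> 3 * (2 * (1 * 1))    # base case reached: factorial(0) returns 1
-> 6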

Towers of Hell... I mean Hanoi

Ok... so last week I claimed I found the Tower of Hanoi interesting. Working on the assignment has led me to have very, very, very, VERY different feelings towards it. Or towards the assignment in general.

First of all, it was the error message we were getting in part 1... Error: type CheeseView is not highlight-able.
A) What is a type CheeseView?
B) Why is it NOT HIGHLIGHT-ABLE?!?

A little while later, that problem was solved thanks to the Q&A page on Piazza. Turns out we were replacing the string that we had built with a new CheeseView. Two hours of debugging later, our cheese moves!!

And then we get to the small problem of the recursive part of this assignment.
Oh Reve's puzzle how I hate you.
I understand the code.. I understand what it is supposed to do... I just don't understand why it is not working!!
move4(n, origin, second, destination, third)
Why is it going to the second?

Another 3 hours of debugging later, plus a camp-out session at office hours, we found we were missing a helper function to move the cheese using three stools - and that is where the Tower of Hanoi code comes in.
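For reference, the classic three-stool Tower of Hanoi recursion (the helper we were missing) looks something like this - the function and parameter names are mine, not the assignment's:

def move_three_stools(n, origin, destination, spare):
    # move n cheeses from origin to destination using one spare stool
    if n == 1:
        print('Move top cheese from', origin, 'to', destination)
    else:
        move_three_stools(n - 1, origin, spare, destination)
        print('Move top cheese from', origin, 'to', destination)
        move_three_stools(n - 1, spare, destination, origin)

move_three_stools(3, 1, 3, 2)    # 7 moves from stool 1 to stool 3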
The code works! And now all that is left to do is edit.
YAY!